The link to my GitHub repository is https://github.com/statnaia/IODS-project
The link to GitHub Pages is https://statnaia.github.io/IODS-project/
Dataset: JYTOPKYS3
- The dataset comes from an international survey of Approaches to Learning conducted by Kimmo Vehkalahti in 2014-2015
- The dataset learning2014 consists of 166 rows and 7 variables.
#reading the dataset
learning2014 <- read.csv("D:/Desktop/Courses/Data Science/IODS-project/Data/learning2014.csv", sep=" ", header=TRUE)
#checking the structure and dimensions of the dataset
str(learning2014)
## 'data.frame': 166 obs. of 7 variables:
## $ gender : chr "F" "M" "F" "M" ...
## $ Age : int 53 55 49 53 49 38 50 37 37 42 ...
## $ attitude: num 3.7 3.1 2.5 3.5 3.7 3.8 3.5 2.9 3.8 2.1 ...
## $ deep : num 3.58 2.92 3.5 3.5 3.67 ...
## $ stra : num 3.38 2.75 3.62 3.12 3.62 ...
## $ surf : num 2.58 3.17 2.25 2.25 2.83 ...
## $ Points : int 25 12 24 10 22 21 21 31 24 26 ...
dim(learning2014)
## [1] 166 7
First we explore the data by constructing scatter plots, PDFs and correlations between the variables by gender. Pink denotes the female participants of the survey and cyan the males. The number of females is approximately twice the number of males. Most of the respondents are under 35-40 years of age. The boxplots and PDFs for Global attitude toward statistics, Deep approach, Surface approach, Strategic approach and Total points look quite similar for both genders. Overall, males have somewhat higher scores for attitude than females, and vice versa for Surface approach. Surface approach scores are negatively correlated with all other variables.
The correlations between variables are in general quite low, and non-significant in many cases. This can also be seen from the scatter plots: the relationships between variables seem mostly quite random. Surface approach scores are negatively correlated with all other variables, but the correlations are significant only for males, and only with Deep approach and Global attitude toward statistics. On the other hand, Global attitude toward statistics and Total points are significantly positively correlated with each other for both males and females.
# access the GGally and ggplot2 libraries
#install.packages("ggplot2")
#install.packages("dplyr")
library(ggplot2)
library(GGally)
## Registered S3 method overwritten by 'GGally':
## method from
## +.gg ggplot2
# create a more advanced plot matrix with ggpairs()
p <- ggpairs(learning2014, mapping = aes(col = gender, alpha = 0.3), lower = list(combo = wrap("facethist", bins = 20)))
# draw the plot
p
Having studied the relationships between variables, it seems that Global attitude toward statistics might explain the variation in Total points the best. Nevertheless, adding two other variables, Strategic approach and Surface approach, might improve the model. A summary of a multiple linear regression model is shown below.
# creating a multiple regression model with attitude, strategic learning, and surface learning as explanatory variables
# target variable is Points
my_model <- lm(Points ~ attitude + stra + surf, data = learning2014)
# print out a summary of the model
summary(my_model)
##
## Call:
## lm(formula = Points ~ attitude + stra + surf, data = learning2014)
##
## Residuals:
## Min 1Q Median 3Q Max
## -17.1550 -3.4346 0.5156 3.6401 10.8952
##
## Coefficients:
## Estimate Std. Error t value Pr(>|t|)
## (Intercept) 11.0171 3.6837 2.991 0.00322 **
## attitude 3.3952 0.5741 5.913 1.93e-08 ***
## stra 0.8531 0.5416 1.575 0.11716
## surf -0.5861 0.8014 -0.731 0.46563
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Residual standard error: 5.296 on 162 degrees of freedom
## Multiple R-squared: 0.2074, Adjusted R-squared: 0.1927
## F-statistic: 14.13 on 3 and 162 DF, p-value: 3.156e-08
As indicated by the significance stars (t value and Pr(>|t|) columns), Global attitude toward statistics is significantly positively associated with exam points, but the other two variables are not. The p-values for these two variables are greater than the commonly used 0.05 threshold, so their null hypotheses (coefficient equal to zero) are not rejected.
Based on these results, I remove these two parameters and make a new model:
#remove stra and surf and run model again
my_model2 <- lm(Points ~ attitude, data = learning2014)
# print out a summary of the model
summary(my_model2)
##
## Call:
## lm(formula = Points ~ attitude, data = learning2014)
##
## Residuals:
## Min 1Q Median 3Q Max
## -16.9763 -3.2119 0.4339 4.1534 10.6645
##
## Coefficients:
## Estimate Std. Error t value Pr(>|t|)
## (Intercept) 11.6372 1.8303 6.358 1.95e-09 ***
## attitude 3.5255 0.5674 6.214 4.12e-09 ***
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Residual standard error: 5.32 on 164 degrees of freedom
## Multiple R-squared: 0.1906, Adjusted R-squared: 0.1856
## F-statistic: 38.61 on 1 and 164 DF, p-value: 4.119e-09
The simple linear model is preferable to the multiple regression model in our case, since the removed predictors were not statistically significant and the fit remains essentially as good (residual standard error 5.32 vs. 5.30). The model fit is described by the value of the Multiple R-squared, 0.19, indicating that the model explains about 19 percent of the variance in the dependent variable. In this simple linear regression, differences in attitude thus explain about a fifth of the variance in exam points.
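As a side check (a minimal sketch, not part of the original workflow), the two nested models fitted above could also be compared formally with an F-test; a non-significant result would indicate that the extra predictors do not improve the fit.
# compare the simple model against the full multiple regression model with a
# nested-model F-test; my_model2 and my_model were fitted above
anova(my_model2, my_model)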
From the “Residuals vs Fitted” plot we can see that the relationship between the residuals and the fitted values is quite random, which indicates that the size of the errors does not depend on the explanatory variable. The “Normal Q-Q” plot shows that the errors are reasonably normally distributed and thus satisfy the normality assumption, and the “Residuals vs Leverage” plot implies that no single observation has an unusually high impact on the model. Overall, the model diagnostics show a reasonably good fit to the data.
# draw diagnostic plots
par(mfrow = c(2,2))
plot(my_model2, which = c(1,2,5))
The dataset is based on questionnaires on student achievement in secondary education in two Portuguese schools. The data attributes include student grades and demographic, social and school-related features.
The data were combined from two datasets: one describing students' performance in Mathematics and one describing students' performance in the Portuguese language. The alcohol consumption of each student is measured with the variable “alc_use” and high alcohol consumption with the variable “high_use”: if alc_use is greater than 2, high_use is TRUE.
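The wrangling itself was done in a separate script; as a rough sketch of that step (alc_joined stands for the hypothetical joined Math/Portuguese data frame), the two alcohol variables could be derived like this:
library(dplyr)
# rough sketch of the earlier wrangling step (not run here): average the workday
# (Dalc) and weekend (Walc) consumption scores and flag averages above 2
alc_joined <- mutate(alc_joined,
                     alc_use  = (Dalc + Walc) / 2,  # average consumption
                     high_use = alc_use > 2)        # TRUE when the average exceeds 2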
alc <- read.csv("./Data/alc.csv")
colnames(alc)
## [1] "X" "school" "sex" "age" "address"
## [6] "famsize" "Pstatus" "Medu" "Fedu" "Mjob"
## [11] "Fjob" "reason" "guardian" "traveltime" "studytime"
## [16] "schoolsup" "famsup" "activities" "nursery" "higher"
## [21] "internet" "romantic" "famrel" "freetime" "goout"
## [26] "Dalc" "Walc" "health" "alc_use" "high_use"
## [31] "failures" "paid" "absences" "G1" "G2"
## [36] "G3"
Here I study the relationships between high/low alcohol consumption (high_use) and some of the other variables in the dataset: age, famrel (quality of family relationships), higher (wish to take higher education) and goout (going out with friends).
My hypotheses for each of them are the following:
Age
Age varies between 15 and 22 years for both men and women; the mean age is 16.5. Male students with high alcohol consumption tend to be roughly one year older (mean age 17) than those with lower alcohol consumption (mean age 16). The situation is the opposite for women. My hypothesis was only partially correct: it held for men only.
The barplot highlights an increase in alcohol consumption from age 15 to 17 for women and high consumption at ages 15-18 for men.
library(dplyr); library(ggplot2)
##
## Attaching package: 'dplyr'
## The following objects are masked from 'package:stats':
##
## filter, lag
## The following objects are masked from 'package:base':
##
## intersect, setdiff, setequal, union
summary(alc$age)
## Min. 1st Qu. Median Mean 3rd Qu. Max.
## 15.00 16.00 17.00 16.58 17.00 22.00
g1 <- ggplot(alc, aes(x = high_use, y = age, col = sex))
g1 + geom_boxplot() + ylab("age") +ggtitle("Student age by high alcohol use and sex")
g2 <- ggplot(data = alc, aes(x = age, fill=high_use))
g2 + geom_bar() + facet_wrap("sex")
Family relationships
Quality of family relationships varies between 1 and 5; the mean value is about 4. It is clearly seen that the quality of relationships within the family influences the amount of alcohol consumed, so my hypothesis was correct. The barplot suggests that the family microclimate matters somewhat more for females.
summary(alc$famrel)
## Min. 1st Qu. Median Mean 3rd Qu. Max.
## 1.000 4.000 4.000 3.935 5.000 5.000
g3 <- ggplot(alc, aes(x = high_use, y = famrel, col = sex))
g3 + geom_boxplot() + ylab("Quality of relationship") + ggtitle("Family relationships")
g4 <- ggplot(data = alc, aes(x = famrel, fill=sex))
g4 + geom_bar() + facet_wrap("high_use")
Wish to take higher education
Dedicated students tend to consume less alcohol, so the hypothesis was partly right. Nevertheless, the number of students who consume a lot of alcohol and at the same time want to enter university is surprisingly high, especially for males; almost every student wants to get a higher education. Note: there are no females who consume a lot of alcohol and do not want to get higher education.
summary(alc$freetime)
## Min. 1st Qu. Median Mean 3rd Qu. Max.
## 1.000 3.000 3.000 3.224 4.000 5.000
g7 <- ggplot(alc, aes(x = high_use, y = higher, col = sex))
g7 + geom_boxplot() + ylab("higher education")+ggtitle("Student wants to take higher education")
g8 <- ggplot(data = alc, aes(x = higher, fill=sex))
g8 + geom_bar() + facet_wrap("high_use")
Going out with friends
Heavy drinkers are more likely to go out than light drinkers for both genders; this is more prominent for males. So, the hypothesis was right.
summary(alc$freetime)
## Min. 1st Qu. Median Mean 3rd Qu. Max.
## 1.000 3.000 3.000 3.224 4.000 5.000
g7 <- ggplot(alc, aes(x = high_use, y = goout, col = sex))
g7 + geom_boxplot() + ylab("going out")+ggtitle("Student goes out with friends")
g8 <- ggplot(data = alc, aes(x = goout, fill=sex))
g8 + geom_bar() + facet_wrap("high_use")
#Fitting a logistic regression model
model1 <- glm(high_use ~ age + famrel + higher + goout, data = alc, family = "binomial")
#Printing out a summary of the model
summary(model1)
##
## Call:
## glm(formula = high_use ~ age + famrel + higher + goout, family = "binomial",
## data = alc)
##
## Deviance Residuals:
## Min 1Q Median 3Q Max
## -1.7714 -0.7794 -0.5518 0.9972 2.4016
##
## Coefficients:
## Estimate Std. Error z value Pr(>|z|)
## (Intercept) -2.44699 2.10067 -1.165 0.24407
## age 0.08397 0.11135 0.754 0.45079
## famrel -0.39393 0.13599 -2.897 0.00377 **
## higheryes -0.83261 0.60855 -1.368 0.17125
## goout 0.76934 0.12021 6.400 1.56e-10 ***
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## (Dispersion parameter for binomial family taken to be 1)
##
## Null deviance: 452.04 on 369 degrees of freedom
## Residual deviance: 391.47 on 365 degrees of freedom
## AIC: 401.47
##
## Number of Fisher Scoring iterations: 4
The logistic regression shows the statistical relationship between the explanatory variables and the binary high/low alcohol consumption variable.
The summary of the model shows that age and the wish to take higher education are not statistically significant, while family relationships and going out with friends are. Thus, poor family relationships and going out a lot with friends both increase the odds of high alcohol consumption.
The factor variable in the model (higher) shows how the wish to take higher education affects alcohol consumption. Its z statistic corresponds to a Wald test of whether the difference between the "yes" and "no" levels of the factor is different from zero. Here it is not significantly different, because practically all students want a higher degree, as we saw earlier.
Odds ratios (ORs) and confidence intervals (CIs)
OR <- coef(model1) %>% exp
CI <- confint(model1) %>% exp
## Waiting for profiling to be done...
#Printing out the odds ratios with their confidence intervals
cbind(OR, CI)
## OR 2.5 % 97.5 %
## (Intercept) 0.08655379 0.001375242 5.3046944
## age 1.08759925 0.874151223 1.3541589
## famrel 0.67440091 0.514884010 0.8792122
## higheryes 0.43491074 0.127926535 1.4297848
## goout 2.15834687 1.716083835 2.7521056
The ORs for age and higher education are not significant because their confidence intervals contain 1. In contrast, a good family relationship is associated with decreased alcohol use.
The odds of high alcohol consumption for the significant variables: 1. students with a good family situation are less likely to drink a lot (OR < 1: the exposure is associated with lower odds of the outcome);
2. the odds for students who go out a lot with their friends are 1.7 to 2.8 times higher than for students who do not (OR > 1: the exposure is associated with higher odds of the outcome).
#I will explore the predictive power of the model by using only the variables with a statistically significant relationship with alcohol consumption.
#Predicting the probability of high_use
probabilities <- predict(model1, type = "response")
#Adding the predicted probabilities to 'alc'
alc <- mutate(alc, probability = probabilities)
# use the probabilities to make a prediction of high_use
alc <- mutate(alc, prediction = probability > 0.5)
# see the last ten original classes, predicted probabilities, and class predictions
select(alc, failures, absences, sex, high_use, probability, prediction) %>% tail(10)
## failures absences sex high_use probability prediction
## 361 1 7 M TRUE 0.19314246 FALSE
## 362 0 3 M TRUE 0.44938048 FALSE
## 363 0 2 M TRUE 0.07079942 FALSE
## 364 0 4 M TRUE 0.53182804 TRUE
## 365 0 3 M FALSE 0.19604475 FALSE
## 366 0 4 M TRUE 0.44938048 FALSE
## 367 0 0 M FALSE 0.14122760 FALSE
## 368 1 4 M TRUE 0.45450785 FALSE
## 369 2 8 M TRUE 0.35976051 FALSE
## 370 1 0 M FALSE 0.15172201 FALSE
# tabulate the target variable versus the predictions
table(high_use = alc$high_use, prediction = alc$prediction)
## prediction
## high_use FALSE TRUE
## FALSE 238 21
## TRUE 74 37
# graphic visualizing of both the actual values and the predictions
g <- ggplot(alc, aes(x = probability, y = high_use,col=prediction))
g + geom_point()
The model does not do a perfect job with predicting alcohol consumption, as it predicts wrongly approximately every 4th time. Next we compute the total proportion of inaccurately classified individuals (the training error):
#Tabulating the target variable versus the predictions
table(high_use = alc$high_use, prediction = alc$prediction) %>% prop.table() %>% addmargins()
## prediction
## high_use FALSE TRUE Sum
## FALSE 0.64324324 0.05675676 0.70000000
## TRUE 0.20000000 0.10000000 0.30000000
## Sum 0.84324324 0.15675676 1.00000000
As can be seen from the plot and from the prediction table, the FALSE category makes up 70% of high_use (0.70) and TRUE 30% (0.30). Nevertheless, the model still does a better job than simply guessing the majority class.
Loss function
# define a loss function (average prediction error)
loss_func <- function(class, prob) {
  n_wrong <- abs(class - prob) > 0.5
  mean(n_wrong)
}
# the average number of wrong predictions in the alc data
loss_func(class = alc$high_use, prob = 0)
## [1] 0.3
loss_func(class = alc$high_use, prob = 1)
## [1] 0.7
loss_func(class = alc$high_use, prob = alc$probability)
## [1] 0.2567568
The results agree with the previous analysis. The output numbers denote the average proportion of wrong predictions in the training data. If I set the probability of high_use to zero for every individual, the resulting error is 0.3; setting it to one gives the complementary 0.7. Using the model's predicted probabilities gives an error of about 0.26, which is better than either constant guess.
Bonus exercise
10-fold cross-validation on the model
# K-fold cross-validation
library(boot)
cv <- cv.glm(data = alc, cost = loss_func, glmfit = model1, K = 10)
# average number of wrong predictions in the cross validation
cv$delta[1]
## [1] 0.2702703
With a cross-validation prediction error of about 0.27, my model performs slightly worse than the model introduced in the DataCamp exercise (error of about 0.26). Below are a few models that perform better.
Super - Bonus exercise
First make a model with a lot of predictors
model2 <- glm(high_use ~ school + sex + age + Pstatus + Medu + Fedu + Mjob + Fjob + reason + nursery + internet + guardian + traveltime + studytime + failures + schoolsup + famsup + paid + activities + higher + romantic + famrel + freetime + goout + health + absences + G1 + G2+ G3, data = alc, family = "binomial")
cv <- cv.glm(data = alc, cost = loss_func, glmfit = model2, K = 10)
# average number of wrong predictions in the cross validation
cv$delta[1]
## [1] 0.2621622
Using a model with this many predictors is not useful, since its error rate is higher than for the models with fewer predictors shown next.
Next model:
my_model5 <- glm(high_use ~ sex + age + internet + guardian + traveltime + studytime + failures + schoolsup + famsup + paid + activities + higher + romantic + famrel + freetime + goout + health + absences + G1 + G2+ G3, data = alc, family = "binomial")
cv <- cv.glm(data = alc, cost = loss_func, glmfit = my_model5, K = 10)
# average number of wrong predictions in the cross validation
cv$delta[1]
## [1] 0.2459459
The error rate gets smaller when reducing the predictors.
Next model:
my_model6 <- glm(high_use ~ studytime + famsup + activities + goout + absences, data = alc, family = "binomial")
cv <- cv.glm(data = alc, cost = loss_func, glmfit = my_model6, K = 10)
# average number of wrong predictions in the cross validation
cv$delta[1]
## [1] 0.2297297
The error rate gets smaller as predictors are removed, especially when dropping those that have no relationship with high_use.
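As an alternative to pruning predictors by hand (a sketch only, not the approach used above), an automated AIC-based backward selection could be run on the large model:
# AIC-based backward selection starting from the large model with many predictors
model_step <- step(model2, direction = "backward", trace = 0)
summary(model_step)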
The Boston dataset is loaded from the MASS package of R.
This dataset contains housing values in the suburbs of Boston and has 506 observations and 14 variables; two of them (chas and rad) are integer-valued and the rest are numeric.
Description of the dataset can be found here: https://stat.ethz.ch/R-manual/R-devel/library/MASS/html/Boston.html
Variables of the dataset are:
1. ‘crim’ (per capita crime rate by town)
2. ‘zn’ (proportion of residential land zoned for lots over 25,000 sq.ft)
3. ‘indus’ (proportion of non-retail business acres per town)
4. ‘chas’ (Charles River dummy variable (= 1 if tract bounds river; 0 otherwise))
5. ‘nox’ (nitrogen oxides concentration (parts per 10 million))
6. ‘rm’ (average number of rooms per dwelling)
7. ‘age’ (proportion of owner-occupied units built prior to 1940)
8. ‘dis’ (weighted mean of distances to five Boston employment centres)
9. ‘rad’ (index of accessibility to radial highways)
10. ‘tax’ (full-value property-tax rate per $10,000)
11. ‘ptratio’ (pupil-teacher ratio by town)
12. ‘black’ (1000(Bk - 0.63)^2 where Bk is the proportion of blacks by town)
13. ‘lstat’ (lower status of the population (percent))
14. ‘medv’ (median value of owner-occupied homes in $1000s)
library(MASS)
##
## Attaching package: 'MASS'
## The following object is masked from 'package:dplyr':
##
## select
library(dplyr)
data(Boston)
# explore the dataset: dimensions, structure and summary
dim(Boston)
## [1] 506 14
str(Boston)
## 'data.frame': 506 obs. of 14 variables:
## $ crim : num 0.00632 0.02731 0.02729 0.03237 0.06905 ...
## $ zn : num 18 0 0 0 0 0 12.5 12.5 12.5 12.5 ...
## $ indus : num 2.31 7.07 7.07 2.18 2.18 2.18 7.87 7.87 7.87 7.87 ...
## $ chas : int 0 0 0 0 0 0 0 0 0 0 ...
## $ nox : num 0.538 0.469 0.469 0.458 0.458 0.458 0.524 0.524 0.524 0.524 ...
## $ rm : num 6.58 6.42 7.18 7 7.15 ...
## $ age : num 65.2 78.9 61.1 45.8 54.2 58.7 66.6 96.1 100 85.9 ...
## $ dis : num 4.09 4.97 4.97 6.06 6.06 ...
## $ rad : int 1 2 2 3 3 3 5 5 5 5 ...
## $ tax : num 296 242 242 222 222 222 311 311 311 311 ...
## $ ptratio: num 15.3 17.8 17.8 18.7 18.7 18.7 15.2 15.2 15.2 15.2 ...
## $ black : num 397 397 393 395 397 ...
## $ lstat : num 4.98 9.14 4.03 2.94 5.33 ...
## $ medv : num 24 21.6 34.7 33.4 36.2 28.7 22.9 27.1 16.5 18.9 ...
summary(Boston)
## crim zn indus chas
## Min. : 0.00632 Min. : 0.00 Min. : 0.46 Min. :0.00000
## 1st Qu.: 0.08205 1st Qu.: 0.00 1st Qu.: 5.19 1st Qu.:0.00000
## Median : 0.25651 Median : 0.00 Median : 9.69 Median :0.00000
## Mean : 3.61352 Mean : 11.36 Mean :11.14 Mean :0.06917
## 3rd Qu.: 3.67708 3rd Qu.: 12.50 3rd Qu.:18.10 3rd Qu.:0.00000
## Max. :88.97620 Max. :100.00 Max. :27.74 Max. :1.00000
## nox rm age dis
## Min. :0.3850 Min. :3.561 Min. : 2.90 Min. : 1.130
## 1st Qu.:0.4490 1st Qu.:5.886 1st Qu.: 45.02 1st Qu.: 2.100
## Median :0.5380 Median :6.208 Median : 77.50 Median : 3.207
## Mean :0.5547 Mean :6.285 Mean : 68.57 Mean : 3.795
## 3rd Qu.:0.6240 3rd Qu.:6.623 3rd Qu.: 94.08 3rd Qu.: 5.188
## Max. :0.8710 Max. :8.780 Max. :100.00 Max. :12.127
## rad tax ptratio black
## Min. : 1.000 Min. :187.0 Min. :12.60 Min. : 0.32
## 1st Qu.: 4.000 1st Qu.:279.0 1st Qu.:17.40 1st Qu.:375.38
## Median : 5.000 Median :330.0 Median :19.05 Median :391.44
## Mean : 9.549 Mean :408.2 Mean :18.46 Mean :356.67
## 3rd Qu.:24.000 3rd Qu.:666.0 3rd Qu.:20.20 3rd Qu.:396.23
## Max. :24.000 Max. :711.0 Max. :22.00 Max. :396.90
## lstat medv
## Min. : 1.73 Min. : 5.00
## 1st Qu.: 6.95 1st Qu.:17.02
## Median :11.36 Median :21.20
## Mean :12.65 Mean :22.53
## 3rd Qu.:16.95 3rd Qu.:25.00
## Max. :37.97 Max. :50.00
The summary shows the minimum, maximum, and the first, second (median), and third quartiles of each variable in the dataset.
The dataset has 506 rows and 14 columns.
The variables have very different ranges and they are not comparable with each other, which probably means that standardization is required before the analysis.
Graphical overview of the dataset:
# plot matrix of the variables
pairs(Boston)
The overview is a bit messy but it offers visual information on how the variables are connected to each other: e.g. there is a hyperbolic relationship between ‘nox’ and ‘dis’, and between ‘lstat’ and ‘medv’, and an almost linear correlation between ‘rm’ and ‘lstat’.
library(tidyr)
library(corrplot)
## Warning: package 'corrplot' was built under R version 4.1.2
## corrplot 0.92 loaded
# calculate the correlation matrix and round it
cor_matrix<-cor(Boston) %>% round(digits = 2)
# print the correlation matrix
cor_matrix
## crim zn indus chas nox rm age dis rad tax ptratio
## crim 1.00 -0.20 0.41 -0.06 0.42 -0.22 0.35 -0.38 0.63 0.58 0.29
## zn -0.20 1.00 -0.53 -0.04 -0.52 0.31 -0.57 0.66 -0.31 -0.31 -0.39
## indus 0.41 -0.53 1.00 0.06 0.76 -0.39 0.64 -0.71 0.60 0.72 0.38
## chas -0.06 -0.04 0.06 1.00 0.09 0.09 0.09 -0.10 -0.01 -0.04 -0.12
## nox 0.42 -0.52 0.76 0.09 1.00 -0.30 0.73 -0.77 0.61 0.67 0.19
## rm -0.22 0.31 -0.39 0.09 -0.30 1.00 -0.24 0.21 -0.21 -0.29 -0.36
## age 0.35 -0.57 0.64 0.09 0.73 -0.24 1.00 -0.75 0.46 0.51 0.26
## dis -0.38 0.66 -0.71 -0.10 -0.77 0.21 -0.75 1.00 -0.49 -0.53 -0.23
## rad 0.63 -0.31 0.60 -0.01 0.61 -0.21 0.46 -0.49 1.00 0.91 0.46
## tax 0.58 -0.31 0.72 -0.04 0.67 -0.29 0.51 -0.53 0.91 1.00 0.46
## ptratio 0.29 -0.39 0.38 -0.12 0.19 -0.36 0.26 -0.23 0.46 0.46 1.00
## black -0.39 0.18 -0.36 0.05 -0.38 0.13 -0.27 0.29 -0.44 -0.44 -0.18
## lstat 0.46 -0.41 0.60 -0.05 0.59 -0.61 0.60 -0.50 0.49 0.54 0.37
## medv -0.39 0.36 -0.48 0.18 -0.43 0.70 -0.38 0.25 -0.38 -0.47 -0.51
## black lstat medv
## crim -0.39 0.46 -0.39
## zn 0.18 -0.41 0.36
## indus -0.36 0.60 -0.48
## chas 0.05 -0.05 0.18
## nox -0.38 0.59 -0.43
## rm 0.13 -0.61 0.70
## age -0.27 0.60 -0.38
## dis 0.29 -0.50 0.25
## rad -0.44 0.49 -0.38
## tax -0.44 0.54 -0.47
## ptratio -0.18 0.37 -0.51
## black 1.00 -0.37 0.33
## lstat -0.37 1.00 -0.74
## medv 0.33 -0.74 1.00
# visualize the correlation matrix
corrplot(cor_matrix, method="circle", type = "upper", cl.pos = "b", tl.pos = "d", tl.cex = 0.6)
Red circles in the correlation plot denote negative correlations and blue circles positive ones. The bigger and darker the circle, the stronger the correlation between the two variables.
There is a quite strong correlation between the ‘nox’ parameter (nitrogen oxides concentration) and such parameters as ‘age’ (proportion of owner-occupied units built prior to 1940), ‘dis’ (weighted mean of distances to five Boston employment centres), ‘rad’ (index of accessibility to radial highways), ‘tax’ (full-value property-tax rate per $10,000) and ‘lstat’ (lower status of the population (percent)). The nitrogen oxides concentration is positively correlated with the proportion of older buildings, proximity to highways, higher taxes and the share of lower-status population. On the other hand, the higher the concentration of nitrogen oxides, the smaller the weighted mean of distances to employment centres (negative correlation).
Next, there is also a relationship between the ‘lstat’ and ‘medv’ (median value of owner-occupied homes in $1000s) variables: the larger the share of lower-status population, the lower the median value of the homes in the area, which is to be expected. The same logic applies to the relationships of ‘lstat’ and ‘medv’ with ‘rm’ (average number of rooms per dwelling): the more rooms in a dwelling, the higher the median home value and the fewer low-income families can afford it.
Furthermore, the ‘rad’ variable is positively correlated with ‘tax’, meaning that higher property-tax rates apply in areas with better access to the radial highways.
Lastly, there are rather strong correlations between ‘indus’ (proportion of non-retail business acres per town) and ‘nox’, ‘age’, ‘dis’ and ‘tax’. The more industry there is in a town, the more air pollution there is, the older the buildings, the shorter the distances to employment centres and the higher the tax rate.
Based on this analysis, it is safe to say that the variables of the dataset are mostly related to each other and it is possible to build a prediction model using the interplay between parameters.
library(GGally)
library(ggplot2)
p <- ggpairs(Boston, lower = list(combo = wrap("facethist", bins = 20)))
p
Only the ‘rm’ variable looks approximately normally distributed. The other variables are not normally distributed and are measured on very different scales.
Therefore, the dataset needs to be scaled before the analysis.
# center and standardize variables
boston_scaled <- scale(Boston)
# summaries of the scaled variables
summary(boston_scaled)
## crim zn indus chas
## Min. :-0.419367 Min. :-0.48724 Min. :-1.5563 Min. :-0.2723
## 1st Qu.:-0.410563 1st Qu.:-0.48724 1st Qu.:-0.8668 1st Qu.:-0.2723
## Median :-0.390280 Median :-0.48724 Median :-0.2109 Median :-0.2723
## Mean : 0.000000 Mean : 0.00000 Mean : 0.0000 Mean : 0.0000
## 3rd Qu.: 0.007389 3rd Qu.: 0.04872 3rd Qu.: 1.0150 3rd Qu.:-0.2723
## Max. : 9.924110 Max. : 3.80047 Max. : 2.4202 Max. : 3.6648
## nox rm age dis
## Min. :-1.4644 Min. :-3.8764 Min. :-2.3331 Min. :-1.2658
## 1st Qu.:-0.9121 1st Qu.:-0.5681 1st Qu.:-0.8366 1st Qu.:-0.8049
## Median :-0.1441 Median :-0.1084 Median : 0.3171 Median :-0.2790
## Mean : 0.0000 Mean : 0.0000 Mean : 0.0000 Mean : 0.0000
## 3rd Qu.: 0.5981 3rd Qu.: 0.4823 3rd Qu.: 0.9059 3rd Qu.: 0.6617
## Max. : 2.7296 Max. : 3.5515 Max. : 1.1164 Max. : 3.9566
## rad tax ptratio black
## Min. :-0.9819 Min. :-1.3127 Min. :-2.7047 Min. :-3.9033
## 1st Qu.:-0.6373 1st Qu.:-0.7668 1st Qu.:-0.4876 1st Qu.: 0.2049
## Median :-0.5225 Median :-0.4642 Median : 0.2746 Median : 0.3808
## Mean : 0.0000 Mean : 0.0000 Mean : 0.0000 Mean : 0.0000
## 3rd Qu.: 1.6596 3rd Qu.: 1.5294 3rd Qu.: 0.8058 3rd Qu.: 0.4332
## Max. : 1.6596 Max. : 1.7964 Max. : 1.6372 Max. : 0.4406
## lstat medv
## Min. :-1.5296 Min. :-1.9063
## 1st Qu.:-0.7986 1st Qu.:-0.5989
## Median :-0.1811 Median :-0.1449
## Mean : 0.0000 Mean : 0.0000
## 3rd Qu.: 0.6024 3rd Qu.: 0.2683
## Max. : 3.5453 Max. : 2.9865
# change the object to data frame so that it will be easier to use the data
boston_scaled <- as.data.frame(boston_scaled)
class(boston_scaled)
## [1] "data.frame"
The scale (min and max) has changed for all the variables, and the mean of each variable is now zero.
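A quick sanity check (a small sketch) confirms this: scale() subtracts each column mean and divides by the column standard deviation, so every scaled column should have mean 0 and standard deviation 1.
# sanity check of the standardization: column means ~ 0, standard deviations ~ 1
round(colMeans(boston_scaled), 10)
apply(boston_scaled, 2, sd)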
Now we need to create a categorical variable of the crime rate in the Boston dataset (from the scaled crime rate) using quantiles as the break points.
# summary of the scaled crime rate
summary(boston_scaled$crim)
## Min. 1st Qu. Median Mean 3rd Qu. Max.
## -0.419367 -0.410563 -0.390280 0.000000 0.007389 9.924110
The minimum value is -0.42 and the maximum is 9.92. The first quartile is -0.41, the median -0.39 and the third quartile 0.007.
# create a quantile vector of crim and print it
bins <- quantile(boston_scaled$crim)
bins
## 0% 25% 50% 75% 100%
## -0.419366929 -0.410563278 -0.390280295 0.007389247 9.924109610
These are the limits for each category.
# create a categorical variable 'crime'
labels <- c("low", "med_low", "med_high", "high")
crime <- cut(boston_scaled$crim, breaks = bins, include.lowest = TRUE, label=labels)
# look at the table of the new factor crime
table(crime)
## crime
## low med_low med_high high
## 127 126 126 127
127 values fall into the first and last categories, and 126 into the second and third. Values between -0.419 and -0.411 are in category ‘low’. Values between -0.411 and -0.39 are in category ‘med_low’. Values between -0.39 and 0.00739 are in category ‘med_high’. Values between 0.00739 and 9.92 are in category ‘high’.
# remove original crim from the dataset
boston_scaled <- dplyr::select(boston_scaled, -crim)
# add the new categorical value to scaled data
boston_scaled <- data.frame(boston_scaled, crime)
Here we removed the original variable (crim) from the scaled dataset and added the new categorized variable (crime) to the dataset.
The dataset is prepared now and we can divide the data into training (80%) and testing (20%) sets.
# number of rows in the Boston dataset
n <- nrow(boston_scaled)
# choose randomly 80% of the rows
ind <- sample(n, size = n * 0.8)
# create train set
train <- boston_scaled[ind,]
dim(train)
## [1] 404 14
# create test set
test <- boston_scaled[-ind,]
dim(test)
## [1] 102 14
Train dataset has 404 rows and 14 columns. Test dataset has 102 rows and 14 columns.
Let’s train a linear discriminant analysis (LDA) classification model, with the categorical crime rate as the target variable and all the other variables in the dataset as predictors.
lda.fit <- lda(crime ~ ., data = train)
lda.fit
## Call:
## lda(crime ~ ., data = train)
##
## Prior probabilities of groups:
## low med_low med_high high
## 0.2450495 0.2549505 0.2549505 0.2450495
##
## Group means:
## zn indus chas nox rm age
## low 0.9214253 -0.9369031 -0.153023000 -0.8714964 0.4774098 -0.8745766
## med_low -0.1154996 -0.3049720 -0.004759149 -0.5852917 -0.1310543 -0.3358755
## med_high -0.3690158 0.1945070 0.148137948 0.4043776 0.1316252 0.4181946
## high -0.4872402 1.0171737 0.006051757 1.1028862 -0.4059685 0.8052732
## dis rad tax ptratio black lstat
## low 0.8527471 -0.6976546 -0.7429637 -0.41850413 0.3757102 -0.78360766
## med_low 0.3468539 -0.5425547 -0.4829927 -0.06197587 0.3139390 -0.14194609
## med_high -0.3893260 -0.4076377 -0.2859804 -0.31579960 0.1062698 -0.01462282
## high -0.8670274 1.6375616 1.5136504 0.78011702 -0.8179191 0.88614947
## medv
## low 0.55611727
## med_low -0.01560121
## med_high 0.20565881
## high -0.67736278
##
## Coefficients of linear discriminants:
## LD1 LD2 LD3
## zn 0.065638168 0.63041215 -0.84404783
## indus 0.007942502 -0.40957464 0.41428088
## chas -0.068110146 -0.03862216 0.18642814
## nox 0.504026157 -0.69942232 -1.42313868
## rm -0.070122290 -0.10713481 -0.14515736
## age 0.178211026 -0.28220096 -0.03796747
## dis -0.075839582 -0.21718615 0.07809237
## rad 3.188689749 0.78528679 0.21406185
## tax -0.048198316 0.22545192 0.18978669
## ptratio 0.139117459 0.07232268 -0.33031091
## black -0.137957746 -0.01116230 0.12109622
## lstat 0.300094964 -0.26569566 0.26867508
## medv 0.235121053 -0.36667691 -0.35018322
##
## Proportion of trace:
## LD1 LD2 LD3
## 0.9500 0.0364 0.0136
Prior probabilities of groups: the proportion of training observations in each group. The observations are more or less equally distributed across the groups (all in the range of 23-27%; the numbers change every time the analysis is run, since the 80% training set is sampled randomly).
Group means denote the group centroids, i.e. the mean of each variable within each group.
Coefficients of linear discriminants define the linear combinations of the predictor variables that form the LDA decision rule. Proportion of trace is the share of between-group variance captured by each discriminant function.
LD1 captures about 95% of the trace, whereas the shares of the other LDs are small, suggesting that the first linear discriminant explains almost all of the between-group variability in the dataset.
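As a side note (a small sketch), the proportion of trace reported above can be recomputed from the singular values stored in the fitted lda object:
# each discriminant's share of the between-group variance, from the singular values
round(lda.fit$svd^2 / sum(lda.fit$svd^2), 4)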
Next we draw the LDA biplot. The color of each point in the biplot indicates its crime class.
# the function for lda biplot arrows
lda.arrows <- function(x, myscale = 1, arrow_heads = 0.1, color = "orange", tex = 0.75, choices = c(1, 2)) {
  heads <- coef(x)
  arrows(x0 = 0, y0 = 0,
         x1 = myscale * heads[, choices[1]],
         y1 = myscale * heads[, choices[2]], col = color, length = arrow_heads)
  text(myscale * heads[, choices], labels = row.names(heads),
       cex = tex, col = color, pos = 3)
}
# target classes as numeric
classes <- as.numeric(train$crime)
# plot the lda results
plot(lda.fit, dimen = 2, col = classes, pch = classes)
lda.arrows(lda.fit, myscale = 1)
From the plot we can see again that accessibility to radial highways (rad) has the highest LD1 coefficient.
To predict the crime rate, we first save the crime classes of the test set as correct_classes (so that we can compare against them later) and remove the crime variable from the test dataset.
# save the correct classes from test data
correct_classes <- test$crime
class(correct_classes)
## [1] "factor"
# remove the crime variable from test data
test <- dplyr::select(test, -crime)
colnames(test)
## [1] "zn" "indus" "chas" "nox" "rm" "age" "dis"
## [8] "rad" "tax" "ptratio" "black" "lstat" "medv"
The crime variable is no longer present in the test dataset.
Next we predict the crime rate and compare the predictions to the correct_classes.
# predict classes with test data
lda.pred <- predict(lda.fit, newdata = test)
# cross tabulate the results
table(correct = correct_classes, predicted = lda.pred$class)
## predicted
## correct low med_low med_high high
## low 18 8 2 0
## med_low 4 13 6 0
## med_high 0 11 11 1
## high 0 0 0 28
The predictions of the model are fairly good: the counts of correct predictions for each category, located on the diagonal of the table, are the largest numbers in their rows.
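The overall accuracy can be read off the same cross-tabulation; a small sketch:
# overall accuracy of the LDA predictions: share of test observations on the
# diagonal of the cross-tabulation
conf <- table(correct = correct_classes, predicted = lda.pred$class)
sum(diag(conf)) / sum(conf)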
Next we load again the Boston dataset and scale it to get comparable distances.
# load the Boston dataset, scale it and create the euclidean distance matrix
library(MASS)
data('Boston')
boston_scaled <- scale(Boston)
boston_scaled <- as.data.frame(boston_scaled)
dist_eu <- dist(boston_scaled, method = "euclidean", diag = FALSE, upper = FALSE)
summary(dist_eu)
## Min. 1st Qu. Median Mean 3rd Qu. Max.
## 0.1343 3.4625 4.8241 4.9111 6.1863 14.3970
Euclidean distance is simply the straight-line (geometric) distance between two points, while Manhattan distance sums the absolute differences between the coordinates of the two points.
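As a concrete toy example of the difference, here are both distances between the points (0, 0) and (3, 4):
# toy example: distances between the points (0, 0) and (3, 4)
pts <- rbind(c(0, 0), c(3, 4))
dist(pts, method = "euclidean")   # sqrt(3^2 + 4^2) = 5
dist(pts, method = "manhattan")   # |3 - 0| + |4 - 0| = 7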
Let’s calculate the manhattan distance.
dist_man <- dist(boston_scaled, method = "manhattan", diag = FALSE, upper = FALSE)
summary(dist_man)
## Min. 1st Qu. Median Mean 3rd Qu. Max.
## 0.2662 8.4832 12.6090 13.5488 17.7568 48.8618
Next, we run the k-means algorithm on the dataset. K-means is an unsupervised method that assigns observations to groups (clusters) based on their similarity. It needs the number of clusters as an argument, so the optimal number of clusters has to be determined. First we run k-means with 8 clusters, each identified by a different color. The plot looks very colourful, but it is obvious that the number of clusters is too large.
# k-means clustering
km <-kmeans(Boston, centers = 8)
# plot the Boston dataset with clusters
pairs(Boston, col = km$cluster)
One way to determine the number of clusters is to look at how the total within-cluster sum of squares (WCSS) behaves when the number of clusters changes. The optimal number of clusters is where the total WCSS drops radically.
# MASS, ggplot2 and Boston dataset are available
set.seed(123)
# determine the number of clusters
k_max <- 10
# calculate the total within sum of squares
twcss <- sapply(1:k_max, function(k){kmeans(Boston, k)$tot.withinss})
# visualize the results
library(ggplot2)
qplot(x = 1:k_max, y = twcss, geom = 'line')
It looks like 2 is the optimal number of clusters, since the curve changes dramatically at k = 2.
Therefore, we will run the k-means analysis with only 2 centroids.
# k-means clustering
km <-kmeans(Boston, centers = 2)
# plot the Boston dataset with clusters
pairs(Boston, col = km$cluster)
So, the optimal number of clusters is 2; more than 2 clusters would be redundant. Let's zoom in to have a better look at the results.
# zoom in to specific columns
pairs(Boston[1:5], col = km$cluster)
# zoom in to specific columns
pairs(Boston[6:10], col = km$cluster)
# zoom in to specific columns
pairs(Boston[10:14], col = km$cluster)
We can see that the clusters denoted by red and black are distinguishable from each other, which supports the idea of two optimal clusters.
Our previous conclusions based on correlations can be observed here too. Such pairs as ‘indus’ and ‘nox’, ‘lstat’ and ‘medv’, ‘medv’ and ‘rm’, ‘rm’ and ‘lstat’, and ‘dis’ and ‘nox’ appear to have roughly linear or hyperbolic relationships.
library(MASS)
data('Boston')
boston_scaled <- scale(Boston)
boston_scaled <- as.data.frame(boston_scaled)
boston_scaled <- dplyr::select(boston_scaled, -crim)
n <- 506
ind <- sample(n, size = n * 0.8)
ktrain <- boston_scaled[ind,]
ktest <- boston_scaled[-ind,]
km <-kmeans(ktrain, centers = 4)
#length(km)
lda.fit <- lda(km$cluster ~ . , data = ktrain)
lda.fit
## Call:
## lda(km$cluster ~ ., data = ktrain)
##
## Prior probabilities of groups:
## 1 2 3 4
## 0.2227723 0.2747525 0.3811881 0.1212871
##
## Group means:
## zn indus chas nox rm age dis
## 1 -0.4812850 0.5190046 0.16512651 0.4636541 -0.5324236 0.6083763 -0.5683231
## 2 -0.4872402 1.0403131 -0.02404347 1.0527448 -0.3806158 0.7760818 -0.8247049
## 3 -0.1241766 -0.6522457 0.06002355 -0.5753066 0.4028566 -0.3996920 0.3579442
## 4 2.2643344 -1.1492006 -0.27232907 -1.1976137 0.6718297 -1.4468053 1.6575639
## rad tax ptratio black lstat medv
## 1 -0.5926685 -0.2923421 0.05697847 0.04711926 0.4630253 -0.4278552
## 2 1.6182167 1.5342238 0.80494617 -0.81318997 0.8336384 -0.7146212
## 3 -0.5523147 -0.7449329 -0.40267404 0.36555371 -0.5852032 0.4973671
## 4 -0.6865511 -0.5704092 -0.76752788 0.34774570 -0.9327205 0.7233699
##
## Coefficients of linear discriminants:
## LD1 LD2 LD3
## zn 0.10622983 1.653335114 1.24563338
## indus 0.47940634 -0.167290290 0.50001998
## chas -0.01827653 -0.110962089 0.03150472
## nox -0.06167804 -0.048162595 0.66089368
## rm -0.05933667 0.159892918 -0.18545570
## age 0.10134636 -0.430198895 0.19061438
## dis -0.44615071 0.572352328 -0.21526290
## rad 3.58772420 0.718037376 -2.12731486
## tax 1.07739294 0.878746401 1.01263901
## ptratio 0.37148164 -0.070944851 0.57158684
## black -0.03356943 -0.003761048 -0.08426066
## lstat 0.21415982 0.003520682 0.26131387
## medv -0.06573643 0.162441226 0.03495705
##
## Proportion of trace:
## LD1 LD2 LD3
## 0.8508 0.1216 0.0276
lda.arrows <- function(x, myscale = 1, arrow_heads = 0.1, color = "red", tex = 0.75, choices = c(1, 2)) {
  heads <- coef(x)
  arrows(x0 = 0, y0 = 0,
         x1 = myscale * heads[, choices[1]],
         y1 = myscale * heads[, choices[2]], col = color, length = arrow_heads)
  text(myscale * heads[, choices], labels = row.names(heads),
       cex = tex, col = color, pos = 3)
}
classes <- as.numeric(train$crime)
plot(lda.fit, dimen = 2, col = classes, pch = classes)
lda.arrows(lda.fit, myscale = 1)
In the plot we can see the biplot for the LDA analysis using the k-means clusters as target classes. The ‘rad’ variable is again the most influential linear separator for the clusters, followed by ‘zn’ and ‘tax’.
Next, we create a matrix product, which is a projection of the data points onto the linear discriminants, and make a 3D plot of the columns of the matrix product.
model_predictors <- dplyr::select(train, -crime)
# check the dimensions
dim(model_predictors)
## [1] 404 13
dim(lda.fit$scaling)
## [1] 13 3
# matrix multiplication
matrix_product <- as.matrix(model_predictors) %*% lda.fit$scaling
matrix_product <- as.data.frame(matrix_product)
# create 3D plot of the columns of the matrix product
library(plotly)
## Warning: package 'plotly' was built under R version 4.1.2
##
## Attaching package: 'plotly'
## The following object is masked from 'package:MASS':
##
## select
## The following object is masked from 'package:ggplot2':
##
## last_plot
## The following object is masked from 'package:stats':
##
## filter
## The following object is masked from 'package:graphics':
##
## layout
plot_ly(x = matrix_product$LD1, y = matrix_product$LD2, z = matrix_product$LD3, type= 'scatter3d', mode='markers')
Now we create a 3D plot and color it by the crime variable of the train dataset.
# create 3D plot of the columns of the matrix product
library(plotly)
plot_ly(x = matrix_product$LD1, y = matrix_product$LD2, z = matrix_product$LD3, type= 'scatter3d', mode='markers', color= train$crime)
Finally, we create a 3D plot and color it by the k-means clusters.
# 3D plot by k means cluster
plot_ly(x = matrix_product$LD1, y = matrix_product$LD2, z = matrix_product$LD3, type= 'scatter3d', mode='markers', color= km$cluster)
The colored plots above differ only by the coloring, which highlights different features. The first colored plot shows the 3D distribution of the three LDs, colored by crime level. It can be seen that the ‘high’ crime rate is the most clearly defined group, standing further away than most of the points belonging to other categories. The second shows the same 3D distribution color-coded by the cluster each point belongs to. There is no standalone group as in the previous plot; the data points are spread across the clusters without any clear pattern.
The Human Development Index (HDI) dataset originates from the United Nations Development Programme.
The Human Development Index (HDI) is a summary measure of average achievement in key dimensions of human development: a long and healthy life, being knowledgeable, and having a decent standard of living.
The HDI is the geometric mean of normalized indices for each of the three dimensions.
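As a small illustration with made-up index values, the geometric mean of the three dimension indices would be computed like this:
# illustration only, with hypothetical index values: the HDI is the geometric
# mean of the health, education and income dimension indices
health_index <- 0.80; education_index <- 0.70; income_index <- 0.75
(health_index * education_index * income_index)^(1/3)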
“Country” = Country name
“GNI” = Gross National Income per capita
“LifeExp” = Life expectancy at birth
“EdExp” = Expected years of schooling
“MatMortality” = Maternal mortality ratio
“TeenBirthRate” = Adolescent birth rate
“ParlPerc” = Percent.Representation.in.Parliament
“edu_ratio” = Edu2_f / Edu2_m
“lab_ratio” = Labo2_f / Labo2_m
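The last two definitions also hint at how the ratios were created in the data wrangling step; a rough sketch (not run here; the original column names Edu2_f, Edu2_m, Labo2_f and Labo2_m are taken from the definitions above):
library(dplyr)
# rough sketch of the earlier wrangling step: female-to-male ratios of secondary
# education and labour force participation
human <- mutate(human,
                edu_ratio = Edu2_f / Edu2_m,
                lab_ratio = Labo2_f / Labo2_m)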
human <- read.csv("./data/human.csv", sep=",", dec = ".", row.names = 1)
summary (human)
## edu_ratio lab_ratio EdExp LifeExp
## Min. :0.1717 Min. :0.1857 Min. : 5.40 Min. :49.00
## 1st Qu.:0.7264 1st Qu.:0.5984 1st Qu.:11.25 1st Qu.:66.30
## Median :0.9375 Median :0.7535 Median :13.50 Median :74.20
## Mean :0.8529 Mean :0.7074 Mean :13.18 Mean :71.65
## 3rd Qu.:0.9968 3rd Qu.:0.8535 3rd Qu.:15.20 3rd Qu.:77.25
## Max. :1.4967 Max. :1.0380 Max. :20.20 Max. :83.50
## GNI MatMortality TeenBirthRate ParlPerc
## Min. : 581 Min. : 1.0 Min. : 0.60 Min. : 0.00
## 1st Qu.: 4198 1st Qu.: 11.5 1st Qu.: 12.65 1st Qu.:12.40
## Median : 12040 Median : 49.0 Median : 33.60 Median :19.30
## Mean : 17628 Mean : 149.1 Mean : 47.16 Mean :20.91
## 3rd Qu.: 24512 3rd Qu.: 190.0 3rd Qu.: 71.95 3rd Qu.:27.95
## Max. :123124 Max. :1100.0 Max. :204.80 Max. :57.50
library(GGally)
library(ggplot2)
library(dplyr)
pairs <- ggpairs(human, mapping = aes(), lower = list(combo = wrap("facethist", bins = 20)))
pairs
In the pairs plots we see that some of the variables are approximately normally distributed (e.g. EdExp: expected years of schooling), while the distribution of other variables is somewhat skewed. Two of them, GNI per capita and maternal mortality are significantly skewed to the right, meaning that most of the values in these variables are low. The ratio of females and males with at least secondary education, expected years of schooling, life expectancy at birth, and Gross National Income per capita are all positively correlated with each other, and negatively correlated with maternal mortality ratio and adolescent birth rate.
#library(tidyverse)
library(corrplot)
# compute the correlation matrix and visualize it with corrplot
cor(human) %>% corrplot(type = "upper")
The correlations can be seen more clearly on a corrplot chart.
Some of the variables in the data are strongly positively or negatively correlated: for instance, maternal mortality has a strong positive correlation with adolescent women giving birth. On the other hand, maternal mortality is strongly negatively correlated with expected education.
Educational expectations and actualities for females are negatively correlated with maternal mortality and adolescent birth.
Meanwhile, the ratio of females to males in the labour force and the percentage of female representatives in parliament are not strongly correlated with anything else. As expected, Gross National Income has a positive correlation with expected length of schooling, life expectancy and the female-to-male ratio of secondary education.
Summary of correlations:
A strong positive correlation can be seen between TeenBirthRate and MatMortality, EdExp and edu_ratio, EdExp and LifeExp, LifeExp and edu_ratio, LifeExp and EdExp.
A strong negative correlation can be seen between TeenBirthRate and edu_ratio, TeenBirthRate and LifeExp, TeenBirthRate and EdExp, MatMortality and edu_ratio, MatMortality and LifeExp, MatMortality and EdExp.
We can see that it’s a highly intercorrelated dataset, which is perfect for the purposes of the Principal Component Analysis.
pca_human <- prcomp(human)
pca_human
## Standard deviations (1, .., p=8):
## [1] 1.854416e+04 1.855219e+02 2.518701e+01 1.145441e+01 3.766241e+00
## [6] 1.565912e+00 1.912052e-01 1.591112e-01
##
## Rotation (n x k) = (8 x 8):
## PC1 PC2 PC3 PC4
## edu_ratio -5.607472e-06 0.0006713951 -3.412027e-05 -2.736326e-04
## lab_ratio 2.331945e-07 -0.0002819357 5.302884e-04 -4.692578e-03
## EdExp -9.562910e-05 0.0075529759 1.427664e-02 -3.313505e-02
## LifeExp -2.815823e-04 0.0283150248 1.294971e-02 -6.752684e-02
## GNI -9.999832e-01 -0.0057723054 -5.156742e-04 4.932889e-05
## MatMortality 5.655734e-03 -0.9916320120 1.260302e-01 -6.100534e-03
## TeenBirthRate 1.233961e-03 -0.1255502723 -9.918113e-01 5.301595e-03
## ParlPerc -5.526460e-05 0.0032317269 -7.398331e-03 -9.971232e-01
## PC5 PC6 PC7 PC8
## edu_ratio -0.0022935252 2.180183e-02 6.998623e-01 7.139410e-01
## lab_ratio 0.0022190154 3.264423e-02 7.132267e-01 -7.001533e-01
## EdExp 0.1431180282 9.882477e-01 -3.826887e-02 7.776451e-03
## LifeExp 0.9865644425 -1.453515e-01 5.380452e-03 2.281723e-03
## GNI -0.0001135863 -2.711698e-05 -8.075191e-07 -1.176762e-06
## MatMortality 0.0266373214 1.695203e-03 1.355518e-04 8.371934e-04
## TeenBirthRate 0.0188618600 1.273198e-02 -8.641234e-05 -1.707885e-04
## ParlPerc -0.0716401914 -2.309896e-02 -2.642548e-03 2.680113e-03
# rounded percentages of variance captured by each PC
s <- summary(pca_human)
pca_pr <- round(100*s$importance[2,], digits = 1)
# create object pc_lab to be used as axis labels
pc_lab <- paste0(names(pca_pr), " (", pca_pr, "%)")
# draw a biplot of the principal component representation and the original variables
biplot(pca_human, choices = 1:2, cex = c(0.6, 1), col = c("grey40", "deeppink2"), xlab = pc_lab[1], ylab = pc_lab[2], main = (title = "PCA_non-scaled"))
## Warning in arrows(0, 0, y[, 1L] * 0.8, y[, 2L] * 0.8, col = col[2L], length =
## arrow.len): zero-length arrow is of indeterminate angle and so skipped
## Warning in arrows(0, 0, y[, 1L] * 0.8, y[, 2L] * 0.8, col = col[2L], length =
## arrow.len): zero-length arrow is of indeterminate angle and so skipped
## Warning in arrows(0, 0, y[, 1L] * 0.8, y[, 2L] * 0.8, col = col[2L], length =
## arrow.len): zero-length arrow is of indeterminate angle and so skipped
## Warning in arrows(0, 0, y[, 1L] * 0.8, y[, 2L] * 0.8, col = col[2L], length =
## arrow.len): zero-length arrow is of indeterminate angle and so skipped
We can immediately see from the summary of the model and from the plot that the first component captures essentially 100% of the variance. This is due to the differences in the ranges of the variables: GNI per capita, represented by the longest arrow, clearly has the largest standard deviation. All the arrows sit on practically the same axis, as if the variables were fully correlated.
Based on these results, principal component analysis does not work well with unstandardized data.
Now we repeat the analysis, but first standardize the data.
# standardize the variables
human_std <- scale(human)
summary(human_std)
## edu_ratio lab_ratio EdExp LifeExp
## Min. :-2.8189 Min. :-2.6247 Min. :-2.7378 Min. :-2.7188
## 1st Qu.:-0.5233 1st Qu.:-0.5484 1st Qu.:-0.6782 1st Qu.:-0.6425
## Median : 0.3503 Median : 0.2316 Median : 0.1140 Median : 0.3056
## Mean : 0.0000 Mean : 0.0000 Mean : 0.0000 Mean : 0.0000
## 3rd Qu.: 0.5958 3rd Qu.: 0.7350 3rd Qu.: 0.7126 3rd Qu.: 0.6717
## Max. : 2.6646 Max. : 1.6632 Max. : 2.4730 Max. : 1.4218
## GNI MatMortality TeenBirthRate ParlPerc
## Min. :-0.9193 Min. :-0.6992 Min. :-1.1325 Min. :-1.8203
## 1st Qu.:-0.7243 1st Qu.:-0.6496 1st Qu.:-0.8394 1st Qu.:-0.7409
## Median :-0.3013 Median :-0.4726 Median :-0.3298 Median :-0.1403
## Mean : 0.0000 Mean : 0.0000 Mean : 0.0000 Mean : 0.0000
## 3rd Qu.: 0.3712 3rd Qu.: 0.1932 3rd Qu.: 0.6030 3rd Qu.: 0.6127
## Max. : 5.6890 Max. : 4.4899 Max. : 3.8344 Max. : 3.1850
pca_human_std <- prcomp(human_std)
pca_human_std
## Standard deviations (1, .., p=8):
## [1] 2.0708380 1.1397204 0.8750485 0.7788630 0.6619563 0.5363061 0.4589994
## [8] 0.3222406
##
## Rotation (n x k) = (8 x 8):
## PC1 PC2 PC3 PC4 PC5
## edu_ratio -0.35664370 0.03796058 -0.24223089 0.62678110 -0.5983585
## lab_ratio 0.05457785 0.72432726 -0.58428770 0.06199424 0.2625067
## EdExp -0.42766720 0.13940571 -0.07340270 -0.07020294 0.1659678
## LifeExp -0.44372240 -0.02530473 0.10991305 -0.05834819 0.1628935
## GNI -0.35048295 0.05060876 -0.20168779 -0.72727675 -0.4950306
## MatMortality 0.43697098 0.14508727 -0.12522539 -0.25170614 -0.1800657
## TeenBirthRate 0.41126010 0.07708468 0.01968243 0.04986763 -0.4672068
## ParlPerc -0.08438558 0.65136866 0.72506309 0.01396293 -0.1523699
## PC6 PC7 PC8
## edu_ratio 0.17713316 0.05773644 0.16459453
## lab_ratio -0.03500707 -0.22729927 -0.07304568
## EdExp -0.38606919 0.77962966 -0.05415984
## LifeExp -0.42242796 -0.43406432 0.62737008
## GNI 0.11120305 -0.13711838 -0.16961173
## MatMortality 0.17370039 0.35380306 0.72193946
## TeenBirthRate -0.76056557 -0.06897064 -0.14335186
## ParlPerc 0.13749772 0.00568387 -0.02306476
# rounded percentages of variance captured by each PC
s_st <- summary(pca_human_std)
pca_pr_st <- round(100*s_st$importance[2,], digits = 1)
# create object pc_lab to be used as axis labels
pc_lab_st <- paste0(names(pca_pr_st), " (", pca_pr_st, "%)")
# draw a biplot of the principal component representation and the original variables
biplot(pca_human_std, choices = 1:2, cex = c(0.6, 1), col = c("grey40", "green"), xlab = pc_lab_st[1], ylab = pc_lab_st[2], main = (title = "PCA_scaled"))
The analysis of the standardized data looks much more reliable. The first component captures 53.6% of the variance and the second 16.2%; together they account for 69.8% of the variance, which is high enough to use these first two components for the analysis. The countries are distributed throughout the two-dimensional space defined by the two principal components. The plot visualizes the relationships of the original features with each other and with the principal components.
Arrows pointing in the same direction indicate positive correlation, and the smaller the angle between them, the stronger the correlation. Arrows pointing in opposite directions indicate negative correlation. The angle between a variable and a PC axis can be interpreted as the correlation between the two. The lengths of the arrows are proportional to the standard deviations of the variables.
Same correlations as described earlier can be seen here.
Here, lab_ratio and ParlPerc (percent representation in Parliament) contribute mainly to PC2, while the ratio of females to males with at least secondary education, expected years of schooling, life expectancy at birth, Gross National Income per capita, maternal mortality ratio and adolescent birth rate contribute to PC1.
Lab_ratio (ratio of labour force participation by sex) and ParlPerc (percent representation in Parliament) are strongly positively correlated. The same is true e.g. for MatMortality (maternal mortality ratio) and TeenBirthRate (adolescent birth rate).
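These impressions from the arrow directions can be checked directly against the raw correlations; a small sketch:
# check the biplot impression numerically: arrows pointing the same way should
# correspond to strong positive correlations
cor(human$MatMortality, human$TeenBirthRate)
cor(human$lab_ratio, human$ParlPerc)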
The standard deviations (proportional to the lengths of the arrows) seem to be of roughly the same magnitude for the different variables.
Interpreting the first two principal component dimensions
Based on these results and on how the countries are situated in the biplot, the first principal component seems to capture mostly the wealth of a country. I would suggest calling it ‘wealth’, because it collects indicators of health, social protection, and economic growth. The variables loading on PC1 describe life expectancy at birth, GNI per capita, maternal mortality ratio, adolescent birth rate, expected years of education, and the female-to-male ratio of secondary education. All of these parameters reflect the welfare of a country.
The second component, PC2, captures some aspects of gender equality, since it deals with the workforce gender ratio and the share of women in Parliament, so I suggest calling it ‘equality’.
Dataset description
The tea dataset, available from the FactoMineR package, is described at https://rdrr.io/cran/FactoMineR/man/tea.html. The data come from a questionnaire on tea: 300 individuals were asked how they drink tea (18 questions), about their perception of the product (12 questions) and for some personal details (4 questions). The result is a data frame with 300 rows and 36 columns; rows represent the individuals and columns the different questions. The first 18 questions are active ones, the 19th is a supplementary quantitative variable (age) and the remaining variables are supplementary categorical variables.
library(tidyr)
library(FactoMineR)
## Warning: package 'FactoMineR' was built under R version 4.1.2
data(tea)
# dimensions of dataset
dim(tea) #[1] 300 36
## [1] 300 36
# structure of dataset
str(tea)
## 'data.frame': 300 obs. of 36 variables:
## $ breakfast : Factor w/ 2 levels "breakfast","Not.breakfast": 1 1 2 2 1 2 1 2 1 1 ...
## $ tea.time : Factor w/ 2 levels "Not.tea time",..: 1 1 2 1 1 1 2 2 2 1 ...
## $ evening : Factor w/ 2 levels "evening","Not.evening": 2 2 1 2 1 2 2 1 2 1 ...
## $ lunch : Factor w/ 2 levels "lunch","Not.lunch": 2 2 2 2 2 2 2 2 2 2 ...
## $ dinner : Factor w/ 2 levels "dinner","Not.dinner": 2 2 1 1 2 1 2 2 2 2 ...
## $ always : Factor w/ 2 levels "always","Not.always": 2 2 2 2 1 2 2 2 2 2 ...
## $ home : Factor w/ 2 levels "home","Not.home": 1 1 1 1 1 1 1 1 1 1 ...
## $ work : Factor w/ 2 levels "Not.work","work": 1 1 2 1 1 1 1 1 1 1 ...
## $ tearoom : Factor w/ 2 levels "Not.tearoom",..: 1 1 1 1 1 1 1 1 1 2 ...
## $ friends : Factor w/ 2 levels "friends","Not.friends": 2 2 1 2 2 2 1 2 2 2 ...
## $ resto : Factor w/ 2 levels "Not.resto","resto": 1 1 2 1 1 1 1 1 1 1 ...
## $ pub : Factor w/ 2 levels "Not.pub","pub": 1 1 1 1 1 1 1 1 1 1 ...
## $ Tea : Factor w/ 3 levels "black","Earl Grey",..: 1 1 2 2 2 2 2 1 2 1 ...
## $ How : Factor w/ 4 levels "alone","lemon",..: 1 3 1 1 1 1 1 3 3 1 ...
## $ sugar : Factor w/ 2 levels "No.sugar","sugar": 2 1 1 2 1 1 1 1 1 1 ...
## $ how : Factor w/ 3 levels "tea bag","tea bag+unpackaged",..: 1 1 1 1 1 1 1 1 2 2 ...
## $ where : Factor w/ 3 levels "chain store",..: 1 1 1 1 1 1 1 1 2 2 ...
## $ price : Factor w/ 6 levels "p_branded","p_cheap",..: 4 6 6 6 6 3 6 6 5 5 ...
## $ age : int 39 45 47 23 48 21 37 36 40 37 ...
## $ sex : Factor w/ 2 levels "F","M": 2 1 1 2 2 2 2 1 2 2 ...
## $ SPC : Factor w/ 7 levels "employee","middle",..: 2 2 4 6 1 6 5 2 5 5 ...
## $ Sport : Factor w/ 2 levels "Not.sportsman",..: 2 2 2 1 2 2 2 2 2 1 ...
## $ age_Q : Factor w/ 5 levels "15-24","25-34",..: 3 4 4 1 4 1 3 3 3 3 ...
## $ frequency : Factor w/ 4 levels "1/day","1 to 2/week",..: 1 1 3 1 3 1 4 2 3 3 ...
## $ escape.exoticism: Factor w/ 2 levels "escape-exoticism",..: 2 1 2 1 1 2 2 2 2 2 ...
## $ spirituality : Factor w/ 2 levels "Not.spirituality",..: 1 1 1 2 2 1 1 1 1 1 ...
## $ healthy : Factor w/ 2 levels "healthy","Not.healthy": 1 1 1 1 2 1 1 1 2 1 ...
## $ diuretic : Factor w/ 2 levels "diuretic","Not.diuretic": 2 1 1 2 1 2 2 2 2 1 ...
## $ friendliness : Factor w/ 2 levels "friendliness",..: 2 2 1 2 1 2 2 1 2 1 ...
## $ iron.absorption : Factor w/ 2 levels "iron absorption",..: 2 2 2 2 2 2 2 2 2 2 ...
## $ feminine : Factor w/ 2 levels "feminine","Not.feminine": 2 2 2 2 2 2 2 1 2 2 ...
## $ sophisticated : Factor w/ 2 levels "Not.sophisticated",..: 1 1 1 2 1 1 1 2 2 1 ...
## $ slimming : Factor w/ 2 levels "No.slimming",..: 1 1 1 1 1 1 1 1 1 1 ...
## $ exciting : Factor w/ 2 levels "exciting","No.exciting": 2 1 2 2 2 2 2 2 2 2 ...
## $ relaxing : Factor w/ 2 levels "No.relaxing",..: 1 1 2 2 2 2 2 2 2 2 ...
## $ effect.on.health: Factor w/ 2 levels "effect on health",..: 2 2 2 2 2 2 2 2 2 2 ...
# summary of the variables
summary(tea)
## breakfast tea.time evening lunch
## breakfast :144 Not.tea time:131 evening :103 lunch : 44
## Not.breakfast:156 tea time :169 Not.evening:197 Not.lunch:256
##
##
##
##
##
## dinner always home work
## dinner : 21 always :103 home :291 Not.work:213
## Not.dinner:279 Not.always:197 Not.home: 9 work : 87
##
##
##
##
##
## tearoom friends resto pub
## Not.tearoom:242 friends :196 Not.resto:221 Not.pub:237
## tearoom : 58 Not.friends:104 resto : 79 pub : 63
##
##
##
##
##
## Tea How sugar how
## black : 74 alone:195 No.sugar:155 tea bag :170
## Earl Grey:193 lemon: 33 sugar :145 tea bag+unpackaged: 94
## green : 33 milk : 63 unpackaged : 36
## other: 9
##
##
##
## where price age sex
## chain store :192 p_branded : 95 Min. :15.00 F:178
## chain store+tea shop: 78 p_cheap : 7 1st Qu.:23.00 M:122
## tea shop : 30 p_private label: 21 Median :32.00
## p_unknown : 12 Mean :37.05
## p_upscale : 53 3rd Qu.:48.00
## p_variable :112 Max. :90.00
##
## SPC Sport age_Q frequency
## employee :59 Not.sportsman:121 15-24:92 1/day : 95
## middle :40 sportsman :179 25-34:69 1 to 2/week: 44
## non-worker :64 35-44:40 +2/day :127
## other worker:20 45-59:61 3 to 6/week: 34
## senior :35 +60 :38
## student :70
## workman :12
## escape.exoticism spirituality healthy
## escape-exoticism :142 Not.spirituality:206 healthy :210
## Not.escape-exoticism:158 spirituality : 94 Not.healthy: 90
##
##
##
##
##
## diuretic friendliness iron.absorption
## diuretic :174 friendliness :242 iron absorption : 31
## Not.diuretic:126 Not.friendliness: 58 Not.iron absorption:269
##
##
##
##
##
## feminine sophisticated slimming exciting
## feminine :129 Not.sophisticated: 85 No.slimming:255 exciting :116
## Not.feminine:171 sophisticated :215 slimming : 45 No.exciting:184
##
##
##
##
##
## relaxing effect.on.health
## No.relaxing:113 effect on health : 66
## relaxing :187 No.effect on health:234
##
##
##
##
##
The tea dataset contains 300 observations and 36 variables describing tea-drinking habits. Apart from age, all variables are factors, most of them with only two levels. Some of the binary variables are fairly balanced between their two levels, while others are strongly skewed towards one level.
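A quick check of the variable types confirms this (a small addition, not part of the original output):
# count how many columns are factors and how many are integers
table(sapply(tea, class))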
The full data frame is too large for a meaningful MCA analysis here, so I keep only a few of the columns.
keep_columns <- c("Tea", "How", "how", "sugar", "where", "lunch")
tea_time <- dplyr::select(tea, one_of(keep_columns))
str(tea_time)
## 'data.frame': 300 obs. of 6 variables:
## $ Tea : Factor w/ 3 levels "black","Earl Grey",..: 1 1 2 2 2 2 2 1 2 1 ...
## $ How : Factor w/ 4 levels "alone","lemon",..: 1 3 1 1 1 1 1 3 3 1 ...
## $ how : Factor w/ 3 levels "tea bag","tea bag+unpackaged",..: 1 1 1 1 1 1 1 1 2 2 ...
## $ sugar: Factor w/ 2 levels "No.sugar","sugar": 2 1 1 2 1 1 1 1 1 1 ...
## $ where: Factor w/ 3 levels "chain store",..: 1 1 1 1 1 1 1 1 2 2 ...
## $ lunch: Factor w/ 2 levels "lunch","Not.lunch": 2 2 2 2 2 2 2 2 2 2 ...
#make a bar plot of this smaller data set
gather(tea_time) %>% ggplot(aes(value)) + facet_wrap("key", scales = "free") + geom_bar() + theme(axis.text.x = element_text(angle = 45, hjust = 1, size = 8))
## Warning: attributes are not identical across measure variables;
## they will be dropped
The bar plots above show that (the exact counts are checked right after this list):
- people prefer tea in tea-bag form
- people mostly drink their tea plain (“alone”, i.e. without milk or lemon)
- people mostly drink tea outside of lunch time
- the difference between people who drink tea with and without sugar is small
- Earl Grey is the most popular tea type
- people buy their tea from a chain store rather than a tea shop
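These statements can be verified against the level counts of the subset (a small check):
# frequency of each level in the six selected variables
summary(tea_time)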
MCA analysis
MCA (multiple correspondence analysis) is designed for categorical variables, which is exactly what the tea dataset contains.
# multiple correspondence analysis
mca <- MCA(tea_time, graph = FALSE)
# summary of the model
summary(mca)
##
## Call:
## MCA(X = tea_time, graph = FALSE)
##
##
## Eigenvalues
## Dim.1 Dim.2 Dim.3 Dim.4 Dim.5 Dim.6 Dim.7
## Variance 0.279 0.261 0.219 0.189 0.177 0.156 0.144
## % of var. 15.238 14.232 11.964 10.333 9.667 8.519 7.841
## Cumulative % of var. 15.238 29.471 41.435 51.768 61.434 69.953 77.794
## Dim.8 Dim.9 Dim.10 Dim.11
## Variance 0.141 0.117 0.087 0.062
## % of var. 7.705 6.392 4.724 3.385
## Cumulative % of var. 85.500 91.891 96.615 100.000
##
## Individuals (the 10 first)
## Dim.1 ctr cos2 Dim.2 ctr cos2 Dim.3
## 1 | -0.298 0.106 0.086 | -0.328 0.137 0.105 | -0.327
## 2 | -0.237 0.067 0.036 | -0.136 0.024 0.012 | -0.695
## 3 | -0.369 0.162 0.231 | -0.300 0.115 0.153 | -0.202
## 4 | -0.530 0.335 0.460 | -0.318 0.129 0.166 | 0.211
## 5 | -0.369 0.162 0.231 | -0.300 0.115 0.153 | -0.202
## 6 | -0.369 0.162 0.231 | -0.300 0.115 0.153 | -0.202
## 7 | -0.369 0.162 0.231 | -0.300 0.115 0.153 | -0.202
## 8 | -0.237 0.067 0.036 | -0.136 0.024 0.012 | -0.695
## 9 | 0.143 0.024 0.012 | 0.871 0.969 0.435 | -0.067
## 10 | 0.476 0.271 0.140 | 0.687 0.604 0.291 | -0.650
## ctr cos2
## 1 0.163 0.104 |
## 2 0.735 0.314 |
## 3 0.062 0.069 |
## 4 0.068 0.073 |
## 5 0.062 0.069 |
## 6 0.062 0.069 |
## 7 0.062 0.069 |
## 8 0.735 0.314 |
## 9 0.007 0.003 |
## 10 0.643 0.261 |
##
## Categories (the 10 first)
## Dim.1 ctr cos2 v.test Dim.2 ctr cos2
## black | 0.473 3.288 0.073 4.677 | 0.094 0.139 0.003
## Earl Grey | -0.264 2.680 0.126 -6.137 | 0.123 0.626 0.027
## green | 0.486 1.547 0.029 2.952 | -0.933 6.111 0.107
## alone | -0.018 0.012 0.001 -0.418 | -0.262 2.841 0.127
## lemon | 0.669 2.938 0.055 4.068 | 0.531 1.979 0.035
## milk | -0.337 1.420 0.030 -3.002 | 0.272 0.990 0.020
## other | 0.288 0.148 0.003 0.876 | 1.820 6.347 0.102
## tea bag | -0.608 12.499 0.483 -12.023 | -0.351 4.459 0.161
## tea bag+unpackaged | 0.350 2.289 0.056 4.088 | 1.024 20.968 0.478
## unpackaged | 1.958 27.432 0.523 12.499 | -1.015 7.898 0.141
## v.test Dim.3 ctr cos2 v.test
## black 0.929 | -1.081 21.888 0.382 -10.692 |
## Earl Grey 2.867 | 0.433 9.160 0.338 10.053 |
## green -5.669 | -0.108 0.098 0.001 -0.659 |
## alone -6.164 | -0.113 0.627 0.024 -2.655 |
## lemon 3.226 | 1.329 14.771 0.218 8.081 |
## milk 2.422 | 0.013 0.003 0.000 0.116 |
## other 5.534 | -2.524 14.526 0.197 -7.676 |
## tea bag -6.941 | -0.065 0.183 0.006 -1.287 |
## tea bag+unpackaged 11.956 | 0.019 0.009 0.000 0.226 |
## unpackaged -6.482 | 0.257 0.602 0.009 1.640 |
##
## Categorical variables (eta2)
## Dim.1 Dim.2 Dim.3
## Tea | 0.126 0.108 0.410 |
## How | 0.076 0.190 0.394 |
## how | 0.708 0.522 0.010 |
## sugar | 0.065 0.001 0.336 |
## where | 0.702 0.681 0.055 |
## lunch | 0.000 0.064 0.111 |
Analysis of the summary output:
Eigenvalues: the variances and the percentages of variance retained by each dimension
- there are 11 dimensions
- the explained variance (in %) decreases from one dimension to the next: Dim 1 explains 15.2% of the variance while Dim 11 explains only 3.4%
- only the first four dimensions each retain more than 10% of the variance, together accumulating 51.8% of the variance (a scree plot of this drop-off is sketched below)
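The eigenvalue table is stored in the MCA object, so the drop-off can also be inspected directly (a small sketch using base graphics):
# eigenvalue, % of variance and cumulative % of variance per dimension
round(mca$eig, 3)
# simple scree plot of the percentage of variance explained by each dimension
barplot(mca$eig[, 2], names.arg = rownames(mca$eig), las = 2, ylab = "% of variance")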
Individuals: only the first 10 individuals (rows) are shown
- the summary shows the contribution (ctr) of each individual to each dimension; among the first ten rows, Dim 1 and Dim 2 are most strongly influenced by individuals 4 and 9, respectively
- cos2 describes how well an individual is represented on a dimension: the closer cos2 is to 1, the better the projection, and here individuals 4 and 9 are well represented on Dim 1 and Dim 2, respectively
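These numbers can also be pulled out of the MCA object itself; the component names below follow the FactoMineR documentation:
# contributions (in %) and squared cosines of the first individuals
round(head(mca$ind$contrib), 3)
round(head(mca$ind$cos2), 3)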
Categories table shows:
- the coordinates of the variable categories
- the contribution (%)
- the cos2 (squared correlations)
- the v.test assesses whether a category’s coordinate differs significantly from zero: a value between -2 and 2 means the coordinate is not significantly different from zero, a value above 2 means it is significantly greater than zero, and a value below -2 means it is significantly less than zero. For Dim 1, the categories black, green, lemon, tea bag+unpackaged and unpackaged are significantly greater than zero; Earl Grey, milk and tea bag are significantly less than zero; alone and other do not differ significantly from zero
- the strongest effect seems to be on the packaging variable, where the absolute v.test values exceed 12 (these values are extracted again below)
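The v.test values can be extracted from the model object for a closer look (a small check):
# v.test of each category on the first two dimensions
round(mca$var$v.test[, 1:2], 2)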
Categorical variables (eta2):
- this table shows how strongly each categorical variable is linked to each dimension
- values close to 1 indicate a strong link between the variable and the dimension
- the highest value, 0.708, is for “how” on Dim.1 (the variable ‘how’ describes the packaging of the tea); Dim.1 is also strongly influenced by the variable “where” (0.702), and Dim.2 is influenced mostly by the same two variables (the table is extracted again below)
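For reference, the same eta2 table can be printed directly from the model object:
# squared correlation ratio between each variable and the first three dimensions
round(mca$var$eta2[, 1:3], 3)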
Graphical output:
# visualize MCA
plot(mca, invisible=c("ind"), habillage = "quali", graph.type = "classic")
Analysis of the biplot:
The plot shows the individual variable categories in relation to dimensions 1 and 2. The first dimension accounts for 15.2% and the second for 14.2% of the total inertia. The first dimension seems to be related to the packaging of the tea and to where it is bought: at one end are tea bags and chain stores, at the other unpackaged tea and tea shops. The second dimension seems to describe the tea type and what is added to the tea. Categories of the same variable share the same color, and the distance between categories gives a measure of their similarity. For example, in the bottom right corner we see that people who use unpackaged tea tend to buy it from a tea shop rather than a chain store.
Other MCA plotting options
Next I explore some other options for presenting the MCA output.
Here I reduce the font size with the cex argument, plot only the 10 individuals that contribute most to the dimensions (select = "contrib 10"), and show only the categories whose cos2 is greater than 0.1 (selectMod = "cos2 0.1").
plot(mca, invisible=c("quali.sup"), cex=.8, selectMod = "cos2 0.1", select = "contrib 10")
The following biplot colors the individuals by their cos2 values.
library("factoextra")
## Warning: package 'factoextra' was built under R version 4.1.2
## Welcome! Want to learn more? See two factoextra-related books at https://goo.gl/ve3WBa
fviz_mca_ind(mca, col.ind = "cos2", repel = TRUE)
## Warning: ggrepel: 181 unlabeled data points (too many overlaps). Consider
## increasing max.overlaps
The last biplot highlights the groups of respondents who drink tea at lunch time and those who do not. I also added 95% concentration ellipses around the two groups.
# the transparency of the individual points could also be mapped to cos2:
# fviz_mca_ind(mca, alpha.ind = "cos2")
# Color individuals by groups, add concentration ellipses
# Remove labels: label = "none".
grp <- as.factor(tea_time[, "lunch"])
p <- fviz_mca_ind(mca, label="none", habillage=grp,
addEllipses=TRUE, ellipse.level=0.95)
print(p)
Different representations of the MCA output make it possible to highlight whichever features of the dataset are most relevant for the analysis at hand.
(more chapters to be added similarly as we proceed with the course!)